Kronecker Square Roots and the Block Vec Matrix
Abstract

Using the block vec matrix, I give a necessary and sufficient condition for the factorization of a matrix into the Kronecker product of two other matrices. As a consequence, I obtain an elementary algorithmic procedure to decide whether a matrix has a square root for the Kronecker product.

Introduction

My statistician colleague, J.E. Chacón, asked me how to decide whether a given real matrix A has a square root for the Kronecker product (i.e., whether there exists a B such that A = B ⊗ B) and, in the positive case, how to compute it. His questions were motivated by the fact that, provided a certain real positive definite symmetric matrix has a Kronecker square root, explicit asymptotic expressions for certain estimator errors can be obtained. See [1] for a discussion of the importance of multivariate kernel density derivative estimation.

This note is written mostly due to the lack of a suitable reference for the existence of square roots for the Kronecker product, and it is organized as follows: first, I study the problem of factoring a matrix into a Kronecker product of two matrices, giving a necessary and sufficient condition under which this happens (Theorem 3). As preparation for the main result, I introduce the block vec matrix (Definition 1). The block vec matrix and Theorem 3 then solve our problem in a constructive way.

1. Kronecker product factorization

Throughout this note N, R, and C denote the sets of non-negative integers, real numbers, and complex numbers, respectively. All matrices considered here have real or complex entries; Aᵀ denotes the transpose of A and tr(A) denotes its trace. The operator that transforms a matrix into a stacked vector is known as the vec operator (see [3, Definition 4.2.9] or [6, § 7.5]). If A = (
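The rearrangement idea behind the block vec matrix can be sketched in code. The following is a minimal NumPy sketch, not the paper's algorithm verbatim: it stacks the column-stacked vec of each n×n block of an n²×n² matrix A into a matrix R, so that A = B ⊗ C exactly when R = vec(B) vec(C)ᵀ, a rank-one condition testable via the SVD; a Kronecker square root corresponds to the symmetric case B = C. The function name `kron_square_root` and the tolerance handling are my own choices.

```python
import numpy as np

def kron_square_root(A, tol=1e-10):
    """Try to factor A = kron(B, B); return B, or None if no real B is found.

    Sketch of the rearrangement idea: row (j*n + i) of R holds the
    column-stacked vec of the (i, j) block of A, so that
    A = kron(B, C)  <=>  R = vec(B) vec(C)^T  (a rank-one matrix),
    and a Kronecker square root needs the symmetric case B = C.
    (The zero matrix is rejected by the rank test in this sketch.)
    """
    N = A.shape[0]
    n = int(round(np.sqrt(N)))
    if A.shape != (N, N) or n * n != N:
        return None
    R = np.empty((n * n, n * n))
    for i in range(n):
        for j in range(n):
            block = A[i * n:(i + 1) * n, j * n:(j + 1) * n]
            R[j * n + i] = block.reshape(-1, order="F")  # column-stacking vec
    # Rank-one test via the singular values of R.
    U, s, Vt = np.linalg.svd(R)
    if s[0] <= tol or np.sum(s > tol * s[0]) != 1:
        return None
    u = np.sqrt(s[0]) * U[:, 0]
    v = np.sqrt(s[0]) * Vt[0]
    # A square root requires R = w w^T, i.e. the two factors coincide.
    if np.linalg.norm(u - v) > tol * np.linalg.norm(u):
        return None
    B = u.reshape((n, n), order="F")
    return B if np.allclose(np.kron(B, B), A) else None
```

For example, `kron_square_root(np.kron(B0, B0))` recovers B0 up to sign, while a matrix such as kron(B0, C) with C not proportional to B0 is rejected.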
Related works
Matrix Algebra, Class Notes (part 2)
Convention: Let A be a T×m matrix; the notation vec(A) will mean the Tm-element column vector whose first set of T elements is the first column of A, that is, a.1 in the dot notation for columns; the second set of T elements are those of the second column of A, a.2, and so on. Thus A = [a.1, a.2, · · ·, a.m] in the dot notation. An immediate consequence of the above Convention is Vec of a pr...
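As a quick illustration of this convention (a sketch only): NumPy's column-major reshape implements exactly this column stacking.

```python
import numpy as np

# T = 3, m = 2: columns a.1 = (1, 2, 3) and a.2 = (4, 5, 6).
A = np.array([[1, 4],
              [2, 5],
              [3, 6]])
# vec(A) stacks the columns; order="F" reshapes column-major.
vec_A = A.reshape(-1, order="F")
print(vec_A.tolist())  # [1, 2, 3, 4, 5, 6]
```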
Lyapunov Majorants for Perturbation Analysis of Matrix Equations
Introduction and notation. The sensitivity of computational problems is a major factor determining the accuracy of computations in machine arithmetic. It may be revealed and taken into account by the methods of perturbation analysis [14, 6]. Below we consider the technique of Lyapunov majorants for perturbation analysis of algebraic matrix equations F (A, X) = 0 arising in science and engineeri...
Approximate graph matching
In this paper, vectors are column vectors. Unless otherwise specified, the norm of a matrix ‖M‖ is the Frobenius norm, defined by ‖M‖ = √(∑_{i,j} M_{ij}²). The all-ones vector is denoted 1 and the all-zeros vector is denoted 0. The all-ones matrix is denoted J and the all-zeros matrix is denoted O. The n×n identity matrix is denoted In, or simply I if its size is understood from context. The vector...
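A one-line check of this definition against NumPy's built-in Frobenius norm (illustrative only):

```python
import numpy as np

M = np.array([[1., 2.], [3., 4.]])
# Definition: ||M|| = sqrt(sum of squared entries) = sqrt(30) here.
fro = np.sqrt((M ** 2).sum())
assert np.isclose(fro, np.linalg.norm(M, "fro"))
```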
Structured Backward Error Analysis of Linearized Structured Polynomial Eigenvalue Problems
We start by introducing a new class of structured matrix polynomials, namely, the class of MA-structured matrix polynomials, to provide a common framework for many classes of structured matrix polynomials that are important in applications: the classes of (skew-)symmetric, (anti-)palindromic, and alternating matrix polynomials. Then, we introduce the families of MA-structured strong block minima...
Fast Kronecker Product Kernel Methods via Generalized Vec Trick
The Kronecker product kernel provides the standard approach in the kernel methods literature for learning from graph data, where edges are labeled and both start and end vertices have their own feature representations. These methods allow generalization to new edges whose start and end vertices do not appear in the training data, a setting known as zero-shot or zero-data learning. Such a setti...
Journal: The American Mathematical Monthly
Volume: 122, Issue: -
Pages: -
Published: 2015